non-native english speaker
Using Sentiment Analysis to Investigate Peer Feedback by Native and Non-Native English Speakers
Exline, Brittney, Duffin, Melanie, Harbison, Brittany, da Gomez, Chrissa, Joyner, David
Graduate-level CS programs in the U.S. increasingly enroll international students, with 60.2 percent of master's degrees in 2023 awarded to non-U.S. students. Many of these students take online courses, where peer feedback is used to engage students and improve pedagogy in a scalable manner. Since these courses are conducted in English, many students study in a language other than their first. This paper examines how native versus non-native English speaker status affects three metrics of peer feedback experience in online U.S.-based computing courses. Using the Twitter-roBERTa-based model, we analyze the sentiment of peer reviews written by and to a random sample of 500 students. We then relate sentiment scores and peer feedback ratings to students' language background. Results show that native English speakers rate feedback less favorably, while non-native speakers write more positively but receive less positive sentiment in return. When controlling for sex and age, significant interactions emerge, suggesting that language background plays a modest but complex role in shaping peer feedback experiences.
- North America > United States > Georgia > Fulton County > Atlanta (0.05)
- North America > United States > New York > New York County > New York City (0.04)
- South America > Uruguay > Maldonado > Maldonado (0.04)
- (3 more...)
- Research Report > New Finding (1.00)
- Instructional Material > Course Syllabus & Notes (1.00)
- Education > Educational Setting > Online (1.00)
- Education > Curriculum > Subject-Specific Education (1.00)
- Education > Educational Technology > Educational Software > Computer Based Training (0.34)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Enterprise Applications > Human Resources > Learning Management (0.88)
- Information Technology > Artificial Intelligence > Natural Language > Information Extraction (0.54)
- Information Technology > Artificial Intelligence > Natural Language > Discourse & Dialogue (0.54)
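A minimal sketch of the sentiment-scoring step described above, assuming the public cardiffnlp/twitter-roberta-base-sentiment-latest checkpoint (the abstract names only a "Twitter-roBERTa-based model"); the feedback snippets and their grouping are invented:

```python
# Sketch: score peer-feedback sentiment with a Twitter-RoBERTa checkpoint.
# The exact model the paper used may differ from this public one.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-roberta-base-sentiment-latest",
)

# Hypothetical peer-feedback snippets, labeled by the writer's background.
reviews = [
    ("native", "The report is thorough, but the citations need work."),
    ("non-native", "Great structure! I especially liked your examples."),
]

for group, text in reviews:
    result = classifier(text)[0]  # e.g. {'label': 'positive', 'score': 0.97}
    print(f"{group:>10}: {result['label']} ({result['score']:.2f})")
```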
Impact of ChatGPT on the writing style of condensed matter physicists
Xu, Shaojun, Ye, Xiaohui, Zhang, Mengqi, Wang, Pei
We apply a state-of-the-art difference-in-differences approach to estimate the impact of ChatGPT's release on the writing style of condensed matter papers on arXiv. Our analysis reveals a statistically significant improvement in the English quality of abstracts written by non-native English speakers. Importantly, this improvement remains robust even after accounting for other potential factors, confirming that it can be attributed to the release of ChatGPT. This indicates widespread adoption of the tool. Following the release of ChatGPT, there is a significant increase in the use of unique words, while the frequency of rare words decreases. Across language families, the changes in writing style are significant for authors from the Latin and Ural-Altaic groups, but not for those from the Germanic or other Indo-European groups.
- South America > Venezuela (0.04)
- South America > Colombia (0.04)
- South America > Chile (0.04)
- (31 more...)
- Research Report > Experimental Study (0.99)
- Research Report > New Finding (0.93)
- Health & Medicine > Therapeutic Area (1.00)
- Education > Curriculum > Subject-Specific Education (0.93)
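The difference-in-differences estimate above boils down to an interaction term in a two-way regression. A sketch with synthetic data; the variable names, outcome metric, and effect size are placeholders, not the paper's:

```python
# Sketch: two-way difference-in-differences on a writing-quality metric.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "non_native": rng.integers(0, 2, n),  # 1 = non-native author ("treated")
    "post": rng.integers(0, 2, n),        # 1 = posted after ChatGPT's release
})
# Synthetic outcome: quality rises for non-native authors post-release.
df["quality"] = 0.5 + 0.3 * df["non_native"] * df["post"] + rng.normal(0, 0.1, n)

# The interaction coefficient is the DiD estimate of ChatGPT's impact.
fit = smf.ols("quality ~ non_native * post", data=df).fit()
print(fit.params["non_native:post"])
```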
Native Design Bias: Studying the Impact of English Nativeness on Language Model Performance
Reusens, Manon, Borchert, Philipp, De Weerdt, Jochen, Baesens, Bart
Large Language Models (LLMs) excel at providing information acquired during pretraining on large-scale corpora and following instructions through user prompts. This study investigates whether the quality of LLM responses varies depending on the demographic profile of users. Considering English as the global lingua franca, along with the diversity of its dialects among speakers of different native languages, we explore whether non-native English speakers receive lower-quality or even factually incorrect responses from LLMs more frequently. Our results show that performance discrepancies occur when LLMs are prompted by native versus non-native English speakers and persist when comparing native speakers from Western countries with others. Additionally, we find a strong anchoring effect when the model recognizes or is made aware of the user's nativeness, which further degrades the response quality when interacting with non-native speakers. Our analysis is based on a newly collected dataset with over 12,000 unique annotations from 124 annotators, including information on their native language and English proficiency.
- North America > Canada > Ontario > Toronto (0.04)
- Asia > Singapore (0.04)
- Asia > Indonesia > Bali (0.04)
- (8 more...)
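One simple way to test a group-level accuracy gap of the kind reported above is a two-proportion z-test; the counts below are invented placeholders, not figures from the paper's 12,000-annotation dataset:

```python
# Sketch: z-test for an LLM accuracy gap between prompter groups.
from statsmodels.stats.proportion import proportions_ztest

correct = [870, 790]   # correct responses: [native, non-native] prompters
totals = [1000, 1000]  # responses annotated per group

stat, pval = proportions_ztest(correct, totals)
print(f"z = {stat:.2f}, p = {pval:.4g}")  # small p: gap unlikely to be chance
```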
WordDecipher: Enhancing Digital Workspace Communication with Explainable AI for Non-native English Speakers
Non-native English speakers (NNES) face challenges in digital workspace communication (e.g., emails, Slack messages), often inadvertently translating expressions from their native languages, which can lead to awkward or incorrect usage. Current AI-assisted writing tools offer fluency enhancement and rewriting suggestions; however, NNES may struggle to grasp the subtleties among various expressions, making it challenging to choose the one that accurately reflects their intent. Such challenges are exacerbated in high-stakes text-based communications, where the absence of non-verbal cues heightens the risk of misinterpretation. By leveraging the latest advancements in large language models (LLMs) and word embeddings, we propose WordDecipher, an explainable AI-assisted writing tool to enhance digital workspace communication for NNES. WordDecipher not only identifies the perceived social intentions detected in users' writing, but also generates rewriting suggestions aligned with users' intended messages, specified either numerically or inferred from users' writing in their native language. Then, WordDecipher provides an overview of nuances to help NNES make selections. Through a usage scenario, we demonstrate how WordDecipher can significantly enhance an NNES's ability to communicate her request, showcasing its potential to transform workspace communication for NNES.
- North America > United States > Maryland > Prince George's County > College Park (0.15)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.05)
- Asia > China (0.05)
- North America > United States > New York (0.04)
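A rough approximation of the embedding-based ranking the abstract mentions, using an off-the-shelf sentence-transformers model; the model choice, intent string, and candidate rewrites are assumptions, and WordDecipher's actual pipeline also draws on LLMs:

```python
# Sketch: rank rewrite candidates by closeness to the writer's intent.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

intent = "Politely but firmly ask a colleague for an overdue report."
candidates = [
    "Send me the report now.",
    "Could you please send the report today? It is already overdue.",
    "Whenever you get a chance, maybe send the report along?",
]

intent_emb = model.encode(intent, convert_to_tensor=True)
cand_embs = model.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(intent_emb, cand_embs)[0].tolist()

# Show candidates closest to the intended tone first.
for score, cand in sorted(zip(scores, candidates), reverse=True):
    print(f"{score:.3f}  {cand}")
```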
Ghostbuster: Detecting Text Ghostwritten by Large Language Models
Large language models like ChatGPT write impressively well--so well, in fact, that they've become a problem. Students have begun using these models to ghostwrite assignments, leading some schools to ban ChatGPT. These models are also prone to producing text with factual errors, so wary readers may want to know whether generative AI tools have been used to ghostwrite news articles or other sources before trusting them. What can teachers and consumers do? Existing tools to detect AI-generated text sometimes do poorly on data that differs from what they were trained on.
- Media > Film (0.63)
- Leisure & Entertainment (0.63)
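Ghostbuster itself builds features from weaker language models' token probabilities and searches over feature combinations; the sketch below is only a generic perplexity baseline illustrating the signal such detectors exploit, not the paper's method:

```python
# Sketch: crude perplexity baseline for flagging AI-written text.
# Model-generated text tends to look more predictable to a language model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

THRESHOLD = 40.0  # illustrative; real detectors calibrate on held-out data
sample = "The results demonstrate a significant improvement over the baseline."
ppl = perplexity(sample)
print(f"perplexity={ppl:.1f} -> {'flag as AI' if ppl < THRESHOLD else 'likely human'}")
```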
Programs to detect AI discriminate against non-native English speakers, shows study
Computer programs that are used to detect essays, job applications and other work generated by artificial intelligence can discriminate against people who are non-native English speakers, researchers say. Tests on seven popular AI text detectors found that articles written by people who did not speak English as a first language were often wrongly flagged as AI-generated, a bias that could have a serious impact on students, academics and job applicants. With the rise of ChatGPT, a generative AI program that can write essays, solve problems and create computer code, many teachers now consider AI detection as a "critical countermeasure to deter a 21st-century form of cheating", the researchers say, but they warn that the 99% accuracy claimed by some detectors is "misleading at best." Scientists led by James Zou, an assistant professor of biomedical data science at Stanford University, ran 91 English essays written by non-native English speakers through seven popular GPT detectors to see how well the programs performed. More than half of the essays, which were written for a widely recognised English proficiency test known as the Test of English as a Foreign Language, or TOEFL, were flagged as AI-generated, with one program flagging 98% of the essays as composed by AI.
- Europe > Middle East > Cyprus (0.07)
- North America > United States (0.06)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.63)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.57)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.41)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Diagnosis (0.40)
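The study's TOEFL experiment is easy to mimic in outline. A toy sketch with a deliberately crude lexical-diversity "detector" standing in for the seven commercial tools, and invented essays standing in for the 91 TOEFL essays; it reproduces the bias mechanism (penalizing smaller vocabularies), not the study's numbers:

```python
# Sketch: why vocabulary-based detectors can penalize learner English.
def naive_detector(text: str) -> bool:
    """Flag text whose type-token ratio falls below a cutoff."""
    words = text.lower().split()
    return len(set(words)) / len(words) < 0.7

toefl_style = [
    "The school is good and the teachers are good and the students are good.",
    "The city is big and the city is busy and the city is loud.",
]
native_style = [
    "Sprawling and chaotic, the city rewards aimless wandering.",
    "Her argument, though compact, dismantles the committee's premise.",
]

# A biased detector shows a far higher false-positive rate on learner English.
for name, corpus in [("learner English", toefl_style), ("native", native_style)]:
    fpr = sum(naive_detector(e) for e in corpus) / len(corpus)
    print(f"{name}: false-positive rate = {fpr:.0%}")
```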
Tools to spot AI essays show bias against non-native English speakers
Tools to detect whether a body of English text has been written by humans or artificial intelligence exhibit bias against people whose primary language isn't English. These tools frequently misidentify their work as being created by an AI. Text-generating AI models such as OpenAI's ChatGPT and GPT-4 are being used by some students at schools and universities to create essays that they are passing off as their own work.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.35)